[Reply to ConceptJunkie]
Either it meets the high standards required by the kernel team or it doesn't.
That binary sounds great until you remember what “standards” actually means in kernel-land. It’s not just “passes tests” or “meets the spec.” The spec is the easy part. The standards also include: does it fit the subsystem’s design, does it avoid cleverness debt, does it behave across a zoo of arches/configs, does it keep the fast path fast, and can future maintainers reason about it without resorting to candlelit lockdep séances.
Also: kernel review isn’t a theorem prover. It’s a risk-management pipeline with finite reviewer attention. “Meets the standards” is often a judgment call made under time pressure, not a formally verified conclusion.
It doesn't matter if it was written by AI, aliens or Linus himself.
It matters a lot, and you accidentally picked the perfect trio to prove it.
If Linus writes it, Linus can explain it, defend it on the list, revise it when a maintainer says “no, not like that,” and own the fallout for the next decade. Aliens can’t answer review questions. An LLM can’t show up on LKML and say “good catch, here’s why I chose this memory barrier, and here’s the perf data on Zen4 vs Graviton.” Origin matters because accountability matters.
The kernel doesn’t merge diffs. It merges an ongoing relationship with an author who can justify tradeoffs and do follow-up when reality punches the patch in the face. That’s not politics, that’s maintenance. And it’s exactly why the proposed guidance keeps circling around transparency and “make it easy to review,” rather than pretending there’s a magic AI detector.
I use AI tools when coding and I've used it to generate code at times, but I read through it with a fine-toothed comb, test it thoroughly, and don't commit anything I don't 100% understand.
Good. That’s the only sane way to use any generator, including StackOverflow and “I found this gist on a blog from 2013.”
But “100% understand” is where the wheels start to come off. You think you tested it thoroughly, and it is exactly that kind of innocent confidence that stops a coding career in its tracks. In kernel code, you can understand what the lines say and still miss what they do once the scheduler, the memory model, the compiler, and three architectures start arguing in the hallway. Races, refcount lifetimes, RCU subtleties, error paths, and performance cliffs do not politely announce themselves during your “thorough testing.” Even experts rely on collective review, fuzzing, CI farms, and years of scar tissue, because humans are not exhaustive-state-space machines. Claiming you’ve thoroughly tested anything is arrogance, not expertise.

And here’s the AI-specific twist: LLMs are great at producing code that looks like something a careful person would write. That’s not “slop.” That’s plausibly-correct code that can sail through casual review and still be wrong in the exact corner you didn’t think to test. The dangerous patches are the ones that look boring.
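To make that concrete, here’s a minimal sketch of the boring-looking failure mode (entirely hypothetical; the `widget_*` names and helpers are invented, not from any real subsystem). Nothing crashes, the happy path works, and every test you thought to write passes; the object just quietly leaks on one error path.

```c
/* Hypothetical illustration with invented names: a lookup helper that
 * reads cleanly and passes a casual glance, but leaks a reference on
 * one error path. widget_find()/widget_put() are assumed helpers with
 * get/put semantics, declared here only so the sketch is self-contained. */
#include <errno.h>
#include <stddef.h>

struct widget {
	int id;
	int refcount;            /* stand-in for kref-style get/put */
	unsigned int flags;
};

struct widget *widget_find(int id);   /* returns with a reference held */
void widget_put(struct widget *w);    /* drops that reference */

static int flags_valid(unsigned int flags)
{
	return (flags & ~0x3u) == 0;  /* only two flag bits are defined */
}

struct widget *widget_lookup_and_configure(int id, unsigned int flags,
					   int *err)
{
	struct widget *w = widget_find(id);

	if (!w) {
		*err = -ENOENT;
		return NULL;
	}

	if (!flags_valid(flags)) {
		*err = -EINVAL;
		return NULL;          /* BUG: returns without widget_put(w) */
	}

	w->flags = flags;
	*err = 0;
	return w;
}
```

Hand that to a reviewer who is skimming, and it looks like something a careful person wrote. That’s the whole problem.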
I think anyone working on the kernel is easily capable of the same thing.
I think you’re describing the top slice of kernel contributors and then declaring policy based on the best case. The kernel also gets drive-by patches, corporate throughput patches, newbie patches, and “I fixed my one bug, I give zero shits about the downstream” patches.

I’m guilty of that last one. I connected a 7.1 surround system via HDMI to my 4090 and watched my display go dark. Why did I have to go under the hood? Because EDID (the ancient, flaky, and apparently immortal display-identification data that HDMI still carries over the Display Data Channel, for compatibility’s sake) lets any device scream “I’m a monitor!” loudly enough to hijack your system, even if your main display is DP. The kernel took that EDID at its word and treated it as the main display, so it kept trying to throw the desktop at the phantom VGA display the EDID said existed on my LG soundbar. I couldn’t reprogram the soundbar, so instead I did brain surgery on drivers/gpu/drm/drm_edid.c to filter out my particular soundbar. Works for me; absolutely will not work for 99.99% of the rest of the planet. I’m a sysadmin, not a coder, but I do have a BS in CS and I can code when I have to. Kernel maintainers could obviously come up with a better hack than I did, but this is a corner case that will never be loud enough to draw mainline’s attention. I had to lean heavily on an LLM just to diagnose the problem; once it converged, it gave me the root cause and even helpfully suggested the code I needed to patch into the kernel. It took a few tries -- my C fu was never strong -- but it *worked.*
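For a sense of what that kind of one-off hack amounts to, here’s a minimal sketch (invented names, not the actual drm_edid.c interfaces; a real patch would hook into the kernel’s connector probing, not sit in userspace). It relies only on the EDID format itself: bytes 8-9 hold the packed three-letter manufacturer ID and bytes 10-11 the little-endian product code, which is enough to recognize one specific device and refuse to treat it as a display.

```c
/* Hypothetical blocklist filter; names are invented for illustration. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct edid_ident {
	char mfg[4];        /* PnP ID, e.g. "GSM" is LG's */
	uint16_t product;   /* product code from EDID bytes 10-11 */
};

/* Bytes 8-9: big-endian word packing three 5-bit letters, 1 = 'A'. */
static void edid_decode_mfg(const uint8_t *edid, char mfg[4])
{
	uint16_t id = (uint16_t)((edid[8] << 8) | edid[9]);

	mfg[0] = '@' + ((id >> 10) & 0x1f);
	mfg[1] = '@' + ((id >> 5) & 0x1f);
	mfg[2] = '@' + (id & 0x1f);
	mfg[3] = '\0';
}

/* Return true if this EDID belongs to a device we never want treated
 * as a display (the "soundbar blocklist"). */
static bool edid_is_blocklisted(const uint8_t *edid,
				const struct edid_ident *deny, size_t n)
{
	uint16_t product = (uint16_t)(edid[10] | (edid[11] << 8));
	char mfg[4];
	size_t i;

	edid_decode_mfg(edid, mfg);
	for (i = 0; i < n; i++)
		if (!strcmp(mfg, deny[i].mfg) && product == deny[i].product)
			return true;
	return false;
}
```

Hard-coding one manufacturer/product pair is exactly the “works for me, ships for nobody” shape I’m describing: correct enough for my living room, unmergeable as policy.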
In a nutshell, if you want an AI posture that’s actually compatible with kernel reality, it’s this:
If you can’t explain it, you don’t own it.
If you don’t own it, it doesn’t belong in mainline.
Tools are fine. Epistemic outsourcing is not.